Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
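For readers who want to experiment with the released checkpoints, the following is a minimal sketch of prompting BLOOM through the Hugging Face transformers library. The smaller bigscience/bloom-560m checkpoint id and the generation settings are assumptions chosen so the example runs on modest hardware; the full 176B model requires a multi-GPU setup.

```python
# Minimal sketch: prompting an open BLOOM checkpoint with Hugging Face transformers.
# Assumes the smaller "bigscience/bloom-560m" variant for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Translate to French: I love open research.\nTranslation:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```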
According to the Complementary Learning Systems (CLS) theory~\cite{mcclelland1995there} in neuroscience, humans achieve effective \emph{continual learning} through two complementary systems: a fast learning system, centered on the hippocampus, for rapid learning of the specifics of individual experiences; and a slow learning system, located in the neocortex, for the gradual acquisition of structured knowledge about the environment. Motivated by this theory, we propose \emph{DualNets} (for Dual Networks), a general continual learning framework comprising a fast learning system for supervised learning of pattern-separated representations from specific tasks and a slow learning system for learning task-agnostic general representations via self-supervised learning (SSL). DualNets can seamlessly incorporate both representation types into a holistic framework to facilitate better continual learning in deep neural networks. Through extensive experiments, we demonstrate the promising results of DualNets on a wide range of continual learning protocols, from the standard offline, task-aware setting to the challenging online, task-free scenario. Notably, on the CTrL~\cite{veniat2020} benchmark, DualNets achieve performance competitive with state-of-the-art dynamic-architecture strategies~\cite{ostapenko2021continual}. Furthermore, we conduct comprehensive ablation studies to validate the efficacy, robustness, and scalability of DualNets. Code is publicly available at \url{https://github.com/phquang/DualNet}.
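To make the fast/slow split concrete, the following is a conceptual sketch pairing a slow, self-supervised representation learner with a fast, supervised head. The two-view consistency loss, network sizes, and training loop are illustrative assumptions, not the authors' DualNets implementation.

```python
# Conceptual sketch of a fast/slow learner pair (not the authors' DualNets implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlowLearner(nn.Module):
    """Slow system: learns task-agnostic features with a self-supervised objective."""
    def __init__(self, in_dim=784, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))

    def ssl_loss(self, view1, view2):
        # Assumed SSL objective: pull two augmented views of the same input together.
        z1 = F.normalize(self.encoder(view1), dim=-1)
        z2 = F.normalize(self.encoder(view2), dim=-1)
        return (2 - 2 * (z1 * z2).sum(dim=-1)).mean()

class FastLearner(nn.Module):
    """Fast system: supervised head on top of the slow features for the current task."""
    def __init__(self, feat_dim=128, n_classes=10):
        super().__init__()
        self.head = nn.Linear(feat_dim, n_classes)

    def supervised_loss(self, features, labels):
        return F.cross_entropy(self.head(features), labels)

slow, fast = SlowLearner(), FastLearner()
opt = torch.optim.SGD(list(slow.parameters()) + list(fast.parameters()), lr=1e-2)

x = torch.randn(32, 784); y = torch.randint(0, 10, (32,))
view1, view2 = x + 0.1 * torch.randn_like(x), x + 0.1 * torch.randn_like(x)  # toy augmentations

loss = slow.ssl_loss(view1, view2) + fast.supervised_loss(slow.encoder(x), y)
loss.backward(); opt.step(); opt.zero_grad()
```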
Reducing methane emissions is essential for mitigating global warming. To attribute methane emissions to their sources, a comprehensive dataset of methane source infrastructure is necessary. Recent advances in deep learning on remotely sensed imagery have the potential to identify the locations and characteristics of methane sources, but there is a lack of publicly available data that would enable machine learning researchers and practitioners to build automated mapping approaches. To help fill this gap, we construct a multi-sensor dataset called METER-ML containing 86,625 georeferenced NAIP, Sentinel-1, and Sentinel-2 images in the U.S., labeled for the presence of methane source facilities, including concentrated animal feeding operations, coal mines, landfills, natural gas processing plants, oil refineries and petroleum terminals, and wastewater treatment plants. We experiment with a variety of models that leverage different spatial resolutions, spatial footprints, image products, and spectral bands. We find that our best model achieves an area under the precision-recall curve of 0.821 for identifying oil refineries and petroleum terminals on an expert-labeled test set, along with strong results for concentrated animal feeding operations, suggesting the potential for large-scale mapping. We make METER-ML freely available at https://stanfordmlgroup.github.io/projects/meter-ml/ to support future work on automated methane source mapping.
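As one way to picture the multi-sensor experiments, the sketch below implements a plausible late-fusion classifier with a separate encoder per image product, whose pooled features are concatenated for facility prediction. The band counts, image sizes, and architecture are assumptions for illustration and do not reproduce the paper's models.

```python
# Illustrative late-fusion classifier for multi-sensor imagery (assumed design, not the paper's).
import torch
import torch.nn as nn

def conv_encoder(in_channels, out_dim=64):
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_dim, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class MultiSensorClassifier(nn.Module):
    def __init__(self, n_facility_types=6):
        super().__init__()
        self.naip = conv_encoder(in_channels=4)    # assumed: NAIP RGB + NIR bands
        self.s2 = conv_encoder(in_channels=12)     # assumed: Sentinel-2 multispectral bands
        self.s1 = conv_encoder(in_channels=2)      # assumed: Sentinel-1 VV/VH polarizations
        self.head = nn.Linear(3 * 64, n_facility_types)  # one logit per facility type

    def forward(self, naip, s2, s1):
        feats = torch.cat([self.naip(naip), self.s2(s2), self.s1(s1)], dim=-1)
        return self.head(feats)  # use with BCEWithLogitsLoss for presence/absence labels

model = MultiSensorClassifier()
logits = model(torch.randn(2, 4, 256, 256), torch.randn(2, 12, 64, 64), torch.randn(2, 2, 64, 64))
```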
Deep learning has been actively applied to time series forecasting, leading to a deluge of new autoregressive model architectures. Yet time-index-based models have received little attention, despite attractive properties such as modeling the series as a continuous function of time, which yields smooth representations. Indeed, while naive deep time-index models are far more expressive than the manually predefined function representations of classical time-index models, they are inadequate for forecasting due to a lack of inductive bias and the non-stationarity of time series. In this paper, we propose DeepTime, a deep time-index-based model trained through a meta-learning formulation that overcomes these limitations, yielding an efficient and accurate forecasting model. Extensive experiments on real-world datasets demonstrate that our approach achieves results competitive with state-of-the-art methods while being highly efficient. Code is available at https://github.com/salesforce/deeptime.
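To illustrate what a deep time-index model does, the toy sketch below fits a small coordinate network g(t) on the lookback window and then queries it at future time indices. The inner-loop gradient adaptation here is an assumed stand-in for DeepTime's meta-learning formulation, not the actual training procedure.

```python
# Toy deep time-index model: adapt g(t) on the lookback window, query it on the horizon.
# The inner-loop SGD adaptation is an assumed stand-in for DeepTime's meta-learning formulation.
import torch
import torch.nn as nn

class TimeIndexModel(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, t):          # t: (N, 1) normalized time indices
        return self.net(t)

def forecast(y_lookback, horizon, adapt_steps=100, lr=1e-2):
    """Fit the time-index network on the lookback window, then extrapolate `horizon` steps."""
    L = len(y_lookback)
    t_past = torch.linspace(0.0, 1.0, L).unsqueeze(-1)
    t_future = torch.linspace(1.0, 1.0 + horizon / L, horizon).unsqueeze(-1)
    model = TimeIndexModel()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(adapt_steps):
        loss = ((model(t_past).squeeze(-1) - y_lookback) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return model(t_future).squeeze(-1)

y = torch.sin(torch.linspace(0, 6.28, 96))   # toy series
print(forecast(y, horizon=24).shape)         # torch.Size([24])
```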
Unbiased learning to rank (ULTR) aims to train an unbiased ranking model from biased user click logs. Most current ULTR methods are based on the examination hypothesis (EH), which assumes that the click probability can be factorized into two scalar functions, one related to ranking features and the other to bias factors. Unfortunately, in practice the interactions among features, bias factors, and clicks are complex and usually cannot be factorized in this independent way. Fitting click data with the EH may therefore lead to model misspecification and introduce approximation errors. In this paper, we propose a vector-based EH and formulate the click probability as the dot product of two vector functions. This solution is complete due to its universality in fitting arbitrary click functions. Based on it, we propose a novel model named Vectorization that adaptively learns relevance embeddings and ranks documents by projecting their embeddings onto a base vector. Extensive experiments show that our method significantly outperforms state-of-the-art ULTR methods on complex real-world clicks as well as simple simulated clicks.
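The vector-based examination hypothesis replaces the scalar factorization of click probability with a dot product of a relevance embedding and an observation (bias) embedding; a minimal sketch of such a click model and its training loss follows. Network shapes and feature splits are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of the vector-based examination hypothesis:
# click probability = sigmoid( <relevance_vector(features), observation_vector(bias factors)> ).
import torch
import torch.nn as nn

class VectorEHClickModel(nn.Module):
    def __init__(self, feat_dim=32, bias_dim=4, emb_dim=8):
        super().__init__()
        self.relevance = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))
        self.observation = nn.Sequential(nn.Linear(bias_dim, 32), nn.ReLU(), nn.Linear(32, emb_dim))

    def forward(self, ranking_features, bias_factors):
        r = self.relevance(ranking_features)   # relevance embedding of the document
        o = self.observation(bias_factors)     # observation/bias embedding (e.g., position)
        return (r * o).sum(dim=-1)             # dot product -> click logit

model = VectorEHClickModel()
clicks = torch.randint(0, 2, (16,)).float()
logits = model(torch.randn(16, 32), torch.randn(16, 4))
loss = nn.functional.binary_cross_entropy_with_logits(logits, clicks)
loss.backward()
# At ranking time, documents could be ordered by projecting their relevance embeddings
# onto a fixed base vector, as the abstract describes.
```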
Transformers have been actively studied for time series forecasting in recent years. While often showing promising results in various scenarios, conventional Transformers are not designed to fully exploit the characteristics of time-series data and thus suffer from some fundamental limitations: for example, they generally lack decomposition capability and interpretability, and are neither effective nor efficient for long-term forecasting. In this paper, we propose ETSformer, a novel time-series Transformer architecture that exploits the principle of exponential smoothing to improve Transformers for time-series forecasting. In particular, inspired by the classical exponential smoothing methods for forecasting, we propose novel exponential smoothing attention (ESA) and frequency attention (FA) to replace the self-attention mechanism in vanilla Transformers, improving both accuracy and efficiency. Building on these, we redesign the Transformer architecture with modular decomposition blocks so that it can learn to decompose time-series data into interpretable components such as level, growth, and seasonality. Extensive experiments on various time-series benchmarks validate the efficacy and advantages of the proposed method. Code is available at https://github.com/salesforce/etsformer.
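The exponential smoothing attention named above can be read as attention whose weights decay exponentially with lag instead of being computed from query-key similarity. The toy function below implements that weighting over past values; it is a simplification for intuition, not the paper's ESA module.

```python
# Toy exponential-smoothing-style attention: weights decay exponentially with lag.
import torch

def exponential_smoothing_attention(values, alpha=0.3):
    """values: (T, d). Returns a (T, d) output where output[t] mixes values[0..t]
    with weights alpha * (1 - alpha)**(t - j), so newer time steps dominate."""
    T, _ = values.shape
    idx = torch.arange(T)
    lag = idx.unsqueeze(1) - idx.unsqueeze(0)            # lag[t, j] = t - j
    weights = alpha * (1 - alpha) ** lag.clamp(min=0)
    weights = torch.tril(weights)                        # causal: only attend to j <= t
    return weights @ values

out = exponential_smoothing_attention(torch.randn(24, 16))
print(out.shape)  # torch.Size([24, 16])
```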
With recent advances in optical phase change materials (PCMs), photonic in-memory neurocomputing has demonstrated its superiority for optical neural network (ONN) designs, offering near-zero static power consumption, time-of-light latency, and a compact footprint. However, photonic tensor cores require massive hardware reuse to implement large matrix multiplications because of the limited single-core scale. The resulting large number of PCM writes incurs serious dynamic power and overwhelms the fragile PCM, which has limited write endurance. In this work, we propose a synergistic optimization framework, ELight, to minimize the overall write effort for efficient and reliable optical in-memory neurocomputing. We first propose write-aware training to encourage similarity among weight blocks, and combine it with a post-training optimization method that reduces programming effort by eliminating redundant writes. Experiments show that ELight achieves over a 20x reduction in the total number of writes and in dynamic power with comparable accuracy. With ELight, photonic in-memory neurocomputing will move toward viable machine learning applications with preserved accuracy, orders-of-magnitude longer lifetime, and lower programming energy.
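The post-training step can be pictured as skipping a PCM block write whenever the block to be programmed is already close to what the reused core currently stores. The tolerance and bookkeeping in the sketch below are assumptions, not the paper's exact optimization.

```python
# Illustrative redundant-write elimination: skip reprogramming a PCM block when the new
# weight block is close enough to what the photonic core currently holds.
import numpy as np

def schedule_writes(weight_blocks, current_block, tol=1e-2):
    """weight_blocks: list of (k, k) arrays to be mapped onto one reused photonic core.
    Returns the indices of blocks that actually need a PCM write."""
    writes = []
    for i, block in enumerate(weight_blocks):
        if current_block is None or np.max(np.abs(block - current_block)) > tol:
            writes.append(i)          # block differs too much: program the core
            current_block = block     # the core now stores this block
        # else: reuse the currently programmed block, saving a write
    return writes

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
blocks = [base, base + 1e-3, rng.normal(size=(4, 4)), base]   # similar blocks save writes
print(schedule_writes(blocks, current_block=None))            # [0, 2, 3]
```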
Code generation models have achieved impressive performance. However, they tend to be brittle as slight edits to a prompt could lead to very different generations; these robustness properties, critical for user experience when deployed in real-life applications, are not well understood. Most existing works on robustness in text or code tasks have focused on classification, while robustness in generation tasks is an uncharted area and to date there is no comprehensive benchmark for robustness in code generation. In this paper, we propose ReCode, a comprehensive robustness evaluation benchmark for code generation models. We customize over 30 transformations specifically for code on docstrings, function and variable names, code syntax, and code format. They are carefully designed to be natural in real-life coding practice, preserve the original semantic meaning, and thus provide multifaceted assessments of a model's robustness performance. With human annotators, we verified that over 90% of the perturbed prompts do not alter the semantic meaning of the original prompt. In addition, we define robustness metrics for code generation models considering the worst-case behavior under each type of perturbation, taking advantage of the fact that executing the generated code can serve as objective evaluation. We demonstrate ReCode on SOTA models using HumanEval, MBPP, as well as function completion tasks derived from them. Interesting observations include: better robustness for CodeGen over InCoder and GPT-J; models are most sensitive to syntax perturbations; more challenging robustness evaluation on MBPP over HumanEval.
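A worst-case robustness metric of the kind described above can be computed by counting a task as robustly solved only if the generations for the original prompt and for every perturbed variant pass the unit tests. The data layout and helper name below are assumptions for illustration, not ReCode's actual API.

```python
# Illustrative worst-case robustness metric: a task counts as robustly solved only if the
# generations for the original prompt and for all of its perturbed variants pass the tests.

def robust_pass_rate(results):
    """results: list of dicts like
    {"original_passed": bool, "perturbed_passed": [bool, ...]}  # one entry per task
    Returns the fraction of tasks solved under the worst-case perturbation."""
    solved = 0
    for task in results:
        if task["original_passed"] and all(task["perturbed_passed"]):
            solved += 1
    return solved / max(len(results), 1)

example = [
    {"original_passed": True,  "perturbed_passed": [True, True, True]},   # robustly solved
    {"original_passed": True,  "perturbed_passed": [True, False, True]},  # breaks under one edit
    {"original_passed": False, "perturbed_passed": [True, True, True]},   # fails even unperturbed
]
print(robust_pass_rate(example))  # 0.333...
```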
Incorporating contrastive learning objectives in sentence representation learning (SRL) has yielded significant improvements on many sentence-level NLP tasks. However, it is not well understood why contrastive learning works for learning sentence-level semantics. In this paper, we take a closer look at contrastive sentence representation learning through the lens of isotropy and learning dynamics. We interpret its success stories through the geometry of the representation shifts. We show that contrastive learning brings isotropy, and surprisingly learns to converge tokens to similar positions in the semantic space when given the signal that they are in the same sentence. Also, what we formalize as "spurious contextualization" is mitigated for semantically meaningful tokens, while augmented for functional ones. The embedding space is pushed toward the origin during training, with more areas now better defined. We ablate these findings by observing the learning dynamics under different training temperatures, batch sizes, and pooling methods. With these findings, we aim to shed light on future designs of sentence representation learning methods.
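One simple way to probe the isotropy claim is to track the average pairwise cosine similarity of sentence embeddings before and after contrastive training: values near 1 indicate a narrow anisotropic cone, while values near 0 indicate a more isotropic space. The proxy below is an assumed stand-in rather than the paper's exact measurement.

```python
# Simple anisotropy proxy: average pairwise cosine similarity of embeddings.
import torch
import torch.nn.functional as F

def mean_pairwise_cosine(embeddings):
    """embeddings: (N, d) tensor of sentence embeddings."""
    z = F.normalize(embeddings, dim=-1)
    sims = z @ z.t()                                  # (N, N) cosine similarities
    n = z.size(0)
    off_diag = sims.sum() - sims.diagonal().sum()     # drop self-similarities
    return (off_diag / (n * (n - 1))).item()

before = torch.randn(256, 768) + 4.0   # toy "anisotropic" embeddings sharing a common offset
after = torch.randn(256, 768)          # toy "isotropic" embeddings
print(mean_pairwise_cosine(before), mean_pairwise_cosine(after))
```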
Human organs constantly undergo anatomical changes due to a complex mix of short-term (e.g., heartbeat) and long-term (e.g., aging) factors. Evidently, prior knowledge of these factors will be beneficial when modeling their future state, i.e., via image generation. However, most medical image generation tasks rely only on a single input image, ignoring sequential dependencies even when longitudinal data are available. Sequence-aware deep generative models, where the model input is a sequence of ordered, timestamped images, remain underexplored in medical imaging, a domain with several unique challenges: 1) sequences of varying lengths; 2) missing data or frames; and 3) high dimensionality. To this end, we propose a sequence-aware diffusion model (SADM) for the generation of longitudinal medical images. Recently, diffusion models have shown promising results in high-fidelity image generation. Our method extends this technique by introducing a sequence-aware transformer as the conditional module in a diffusion model. This design enables learning longitudinal dependencies even with missing data during training, and allows autoregressive generation of an image sequence during inference. Our extensive experiments on 3D longitudinal medical images demonstrate the effectiveness of SADM compared with baselines and alternative methods.
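A rough sketch of the conditioning idea: the observed frames are encoded by a transformer, and its summary conditions a denoiser that predicts the noise added to the target frame, as in DDPM-style training. The flattened frames, network sizes, and noise schedule below are illustrative assumptions, not the SADM architecture.

```python
# Rough sketch: a transformer encodes the observed frame sequence, and its summary conditions
# a denoiser that predicts the noise added to the target frame (DDPM-style training step).
import torch
import torch.nn as nn

class SequenceConditionedDenoiser(nn.Module):
    def __init__(self, frame_dim=256, d_model=128):
        super().__init__()
        self.embed = nn.Linear(frame_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.sequence_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.denoiser = nn.Sequential(nn.Linear(frame_dim + d_model + 1, 256), nn.ReLU(),
                                      nn.Linear(256, frame_dim))

    def forward(self, noisy_frame, t, past_frames):
        # past_frames: (B, S, frame_dim) ordered, timestamped frames (flattened images here)
        context = self.sequence_encoder(self.embed(past_frames)).mean(dim=1)   # (B, d_model)
        t = t.float().unsqueeze(-1) / 1000.0                                   # crude timestep embedding
        return self.denoiser(torch.cat([noisy_frame, context, t], dim=-1))     # predicted noise

model = SequenceConditionedDenoiser()
past = torch.randn(2, 3, 256)          # three earlier frames per subject
x0 = torch.randn(2, 256)               # target future frame
t = torch.randint(0, 1000, (2,))
alpha_bar = torch.rand(2, 1)           # stand-in for the cumulative noise schedule at step t
noise = torch.randn_like(x0)
x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
loss = ((model(x_t, t, past) - noise) ** 2).mean()
loss.backward()
```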